Media Lab

By Chris Gulker

The MIT Media Lab was kind enough to arrange for me and boss Lisa Wellman to visit one afternoon during Seybold. 

The building hummed with activity and dripped wires, sensors, hardware, and toys: there was a low subliminal buzz as people coded the
future line by line on Macs and exotic workstations. 

The stuff you see is incredible: the Visible Language Workshop demos things like using 3D text, rendered on an SGI RealityEngine,
to rapidly and easily navigate huge conceptual spaces. 

Grad student Yin Yin Wong showed a giant monitor, six feet wide, that displays news headlines as they happen,
superimposed on a high-res map of the planet. Other juxtapositions of news, geography, time, and search criteria are possible:
unguessed-at relationships appear as novel arrangements are called up. She also demoed layout programs that automatically
format pictures and text into different styles, like Wired or Scientific American, without human help. 

Max Metral of the Autonomous Agents Group showed us a videotape of an agent program that behaves like a dog and demoed some
Web-based agent programs. 

The agent dog is projected onto a large video screen along with an image of the visitor. The dog, a completely autonomous
program with a cute, cartoony on-screen persona, acts without a script or a human operator. It will fetch a ball, play, sleep,
and otherwise behave believably. The dog agent relies on developments in numerous areas of investigation at the lab to
interpret the visitor's motions, learn behavior, and so on. 

Max also showed us two Web-based agents: HOMR, which recommends musical selections, and Webhound, which recommends
interesting URLs. Both programs learn by watching what humans do. 
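Recommenders of this kind are often built on collaborative filtering: score items a user hasn't seen by the opinions of other users with similar tastes. Here's a minimal sketch of that idea in Python; the data, names, and scoring scheme are all invented for illustration, not taken from HOMR or Webhound.

```python
# Toy user-based collaborative filtering: recommend unseen items to a user
# by weighting other users' ratings with a taste-similarity score.
# All ratings and user names below are made up for illustration.

def similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (sum(a[i] ** 2 for i in common) ** 0.5) * \
          (sum(b[i] ** 2 for i in common) ** 0.5)
    return num / den if den else 0.0

def recommend(target, ratings, top_n=1):
    """Rank items the target hasn't rated by similarity-weighted scores."""
    scores = {}
    for user, their in ratings.items():
        if user == target:
            continue
        sim = similarity(ratings[target], their)
        for item, score in their.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * score
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

ratings = {
    "ann": {"jazz": 5, "rock": 1, "folk": 4},
    "bob": {"jazz": 4, "folk": 5, "blues": 5},
    "cam": {"rock": 5, "metal": 4},
}
print(recommend("ann", ratings))  # -> ['blues']
```

Because ann's ratings line up closely with bob's, bob's well-liked "blues" outranks cam's "metal" for her. Real systems of this era scaled the same idea to thousands of users' preference profiles.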
